We propose a method for optimal DER allocation in active electric distribution networks that relies on identifying vulnerable nodes, which we term critical nodes. A change in power at these critical nodes significantly affects the operation of the other connected nodes, making them the most suitable locations for DER placement. We demonstrate and evaluate our approach on the standard IEEE-123 test feeder system. First, we use graph theory to partition the distribution system into an optimal network of microgrids. The partitioning is validated with a graph neural network architecture to confirm that the microgrids are properly formed. Next, using an effective measurable analysis, namely Granger causality, we identify the critical nodes within the partitioned microgrids; placing DERs at these nodes improves network reliability and resilience. Furthermore, to validate system performance and energy resilience, we compute the percolation threshold of the microgrid network, which indicates the resilience of the system after DERs are incorporated at the critical nodes. This first-of-its-kind approach ensures effective microgrid partitioning, critical node identification, optimal DER allocation, and resilience evaluation through data-driven analysis of the distribution network.
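As a rough illustration of the causal-ranking step, the sketch below scores each bus by how many other buses its power time series Granger-causes, using statsmodels; the bus names, lag order, and significance level are assumptions for illustration, not the paper's actual configuration.

```python
# A minimal sketch of a Granger-causality ranking step, not the authors'
# implementation. Bus names, lag order, and significance level are assumed.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def rank_critical_nodes(power: pd.DataFrame, max_lag: int = 4, alpha: float = 0.05):
    """Rank buses by how many other buses their power injections Granger-cause.

    power : DataFrame with one column of active-power measurements per bus.
    """
    scores = {}
    for src in power.columns:
        influenced = 0
        for dst in power.columns:
            if src == dst:
                continue
            # Test "src Granger-causes dst": column order is [effect, cause].
            result = grangercausalitytests(power[[dst, src]], maxlag=max_lag, verbose=False)
            # Smallest ssr F-test p-value over the tested lags.
            p = min(result[lag][0]["ssr_ftest"][1] for lag in range(1, max_lag + 1))
            if p < alpha:
                influenced += 1
        scores[src] = influenced
    # Buses influencing the most neighbours are treated as candidate critical nodes.
    return sorted(scores, key=scores.get, reverse=True)

# Example with placeholder measurements (not the IEEE-123 data):
# power = pd.DataFrame(..., columns=[f"bus_{i}" for i in range(6)])
# critical = rank_critical_nodes(power)[:3]
```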
Equatorial plasma bubbles (EPBs) are plumes of low-density plasma that rise from the bottomside of the F layer toward the exosphere. EPBs are a known cause of radio-wave scintillation, which can degrade communication with spacecraft. We build a random forest regressor to predict and forecast the likelihood [0-1] of an EPB detected by the IBI processor on board Swarm. We use eight years of Swarm data, from 2014 to 2021, and transform the data from a time series into a 5-dimensional space comprising latitude, longitude, MLT, year, and day of year. We also add Kp, F10.7 cm, and solar wind speed. The observed dependence of EPBs on geographic location, local time, season, and solar activity largely agrees with existing work, whereas the link to geomagnetic activity is less clear. The prediction achieves an accuracy of 88% and performs well on EPB-specific spatio-temporal scales, demonstrating that the XGBoost method can successfully capture the climatological and day-to-day variability of Swarm EPBs. Capturing the day-to-day variability has long eluded researchers because of localized and stochastic features within the ionosphere. We use Shapley values to explain the model and to gain insight into the physics of EPBs. We find that as solar wind speed increases, the probability of an EPB decreases. We also identify a distinct spike in EPB probability. Both of these insights come directly from the XGBoost and Shapley techniques.
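For concreteness, a hedged sketch of the gradient-boosted-tree-plus-Shapley workflow described above is shown below; the feature and target column names are illustrative placeholders rather than the actual Swarm data schema.

```python
# Sketch of an XGBoost regression plus Shapley explanation workflow.
# Column names ("epb_probability", "sw_speed", ...) are assumed placeholders.
import pandas as pd
import xgboost as xgb
import shap

FEATURES = ["lat", "lon", "mlt", "year", "day_of_year", "kp", "f10_7", "sw_speed"]

def fit_and_explain(df: pd.DataFrame):
    """Fit a gradient-boosted regressor on EPB likelihood and explain it."""
    X, y = df[FEATURES], df["epb_probability"]          # target in [0, 1]
    model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
    model.fit(X, y)
    # TreeExplainer yields per-prediction Shapley values for each feature,
    # e.g. to inspect how solar wind speed shifts the predicted probability.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    return model, shap_values
```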
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with some benefit from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
How can we train an assistive human-machine interface (e.g., an electromyography-based limb prosthesis) that translates a user's raw command signals into the actions of a robot or computer, when there is no prior mapping, we cannot ask the user for supervision in the form of action labels or reward feedback, and we have no prior knowledge of the task the user is trying to accomplish? The key idea in this paper is that, regardless of the task, when an interface is more intuitive the user's commands are less noisy. We formalize this idea as a completely unsupervised objective for optimizing interfaces: the mutual information between the user's command signals and the induced state transitions in the environment. To evaluate whether this mutual-information score can distinguish effective interfaces from ineffective ones, we conduct an observational study on 540K examples of users operating various keyboard and eye-gaze interfaces for typing, controlling simulated robots, and playing video games. The results show that our mutual-information score predicts the ground-truth task-completion metrics across these domains, with an average Spearman rank correlation of 0.43. In addition to offline evaluation of existing interfaces, we also use the unsupervised objective to learn an interface from scratch: we randomly initialize the interface, let the user attempt to perform their desired task with it, measure the mutual-information score, and update the interface through reinforcement learning to maximize the mutual information. We evaluate our method through a user study with 12 participants who perform a 2D cursor-control task using a perturbed mouse, and an experiment in which one user plays the Lunar Lander game using hand gestures. The results show that we can learn an interface from scratch within 30 minutes, without any user supervision or prior knowledge of the task.
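The mutual-information score could be estimated roughly as in the sketch below, assuming discretized command signals and integer-coded environment states; the paper's actual estimator may differ.

```python
# A minimal sketch of the mutual-information interface score, assuming
# discrete command labels and non-negative integer-coded states.
import numpy as np
from sklearn.metrics import mutual_info_score

def interface_score(commands: np.ndarray, states: np.ndarray) -> float:
    """Estimate I(command; state transition) from logged interaction data.

    commands : shape (T,)   discrete command label at each step
    states   : shape (T+1,) discrete environment state at each step
    """
    # Represent each transition (s_t, s_{t+1}) as a single categorical label.
    transitions = states[:-1] * (states.max() + 1) + states[1:]
    return mutual_info_score(commands, transitions)

# Higher scores should correspond to more intuitive interfaces, so an
# RL-based optimizer could use this score as its reward signal.
```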
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning. We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains error behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
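The two-stage decision rule can be paraphrased as in the sketch below; the invariant sets and the fallback syntax classifier are placeholders for illustration, not INVALIDATOR's real interface.

```python
# A schematic sketch of the two-stage overfitting decision described above;
# the invariant sets and the syntax classifier are assumed placeholders.
from typing import Set

def is_overfitting(patched_invariants: Set[str],
                   correct_specs: Set[str],
                   error_behaviors: Set[str],
                   syntax_classifier=None,
                   patch_text: str = "") -> bool:
    """Return True if an APR-generated patch is judged to overfit."""
    # (1) The patch violates invariants expected to hold in correct versions.
    if not correct_specs <= patched_invariants:
        return True
    # (2) The patch preserves invariants that characterize the buggy behavior.
    if error_behaviors & patched_invariants:
        return True
    # Fallback: defer to a model trained on labeled patches (program syntax).
    if syntax_classifier is not None:
        return bool(syntax_classifier.predict([patch_text])[0])
    return False
```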
When robots learn reward functions using high capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
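A minimal sketch of learning an embedding from such similarity queries is given below, assuming each query labels a pair of behavior trajectories as similar (1) or different (0); the network sizes and the contrastive-style loss are illustrative choices, not the paper's exact formulation.

```python
# A PyTorch sketch of similarity-query representation learning; trajectory
# dimensionality and network sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class BehaviorEncoder(nn.Module):
    def __init__(self, traj_dim: int, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))

    def forward(self, traj):
        return self.net(traj)

def similarity_loss(z_a, z_b, label, margin: float = 1.0):
    """Pull behaviors labeled similar together; push different ones apart."""
    dist = torch.norm(z_a - z_b, dim=-1)
    return torch.mean(label * dist.pow(2) +
                      (1 - label) * torch.clamp(margin - dist, min=0).pow(2))

# encoder = BehaviorEncoder(traj_dim=100)
# z_a, z_b = encoder(traj_a), encoder(traj_b)    # traj_a, traj_b: (B, 100)
# loss = similarity_loss(z_a, z_b, user_labels)  # user_labels in {0., 1.}
```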
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different representation-learning challenges than image data, and traditional machine learning is often superior to deep learning on tabular data. In this paper, we address the challenges of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm that replaces t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach defines the Gaussian embedding and the target cluster distribution independently, so it can accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster-embedding methods on seven tabular data sets. This paper presents one of the first algorithms to jointly learn embedding and clustering in order to improve multivariate tabular data representations for downstream clustering.
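The joint objective can be sketched roughly as below: a reconstruction term plus a divergence term between Gaussian soft assignments in the latent space and an externally supplied target cluster distribution. The architecture, diagonal-covariance Gaussians, and loss weighting are assumptions for illustration, not the paper's exact formulation.

```python
# A simplified sketch of a Gaussian cluster-embedding objective for a tabular
# autoencoder; sizes, covariance structure, and weighting are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TabularAE(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def gaussian_assignments(z, means, log_vars):
    """Soft cluster responsibilities under diagonal multivariate Gaussians."""
    diff = z.unsqueeze(1) - means.unsqueeze(0)                # (B, K, D)
    log_p = -0.5 * ((diff ** 2) / log_vars.exp() + log_vars).sum(-1)
    return F.softmax(log_p, dim=1)                            # (B, K)

def gceals_loss(x, model, means, log_vars, target_p, alpha: float = 1.0):
    """Reconstruction plus KL between Gaussian assignments and target_p,
    where target_p may come from any external clustering algorithm."""
    z, x_hat = model(x)
    recon = F.mse_loss(x_hat, x)
    q = gaussian_assignments(z, means, log_vars)
    cluster = F.kl_div(q.log(), target_p, reduction="batchmean")
    return recon + alpha * cluster
```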
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-balanced Re-weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastically dropping recall scores, i.e., losing majority predicate performance. The trade-off between majority and minority predicate performance on the limited SGG datasets has not yet been correctly analyzed. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then assigns larger weights to the biased predicates, achieving a better trade-off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 \& V6 show the performance and generality of SCR with traditional SGG models.
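A hedged sketch of a re-weighted predicate loss in the spirit of SCR is shown below; the weight-estimation rule here is a simple frequency-based stand-in for the paper's actual skew-based coefficient, intended only to illustrate the re-weighting idea.

```python
# Illustrative re-weighted predicate classification loss; the weighting rule
# is a frequency-based stand-in, not the SCR coefficient from the paper.
import torch
import torch.nn.functional as F

def scr_like_loss(logits, targets, class_counts, gamma: float = 1.0):
    """Weighted predicate classification loss.

    logits       : (B, C) predicate scores
    targets      : (B,)   ground-truth predicate indices
    class_counts : (C,)   training-set frequency of each predicate
    """
    # Rare (minority) predicates receive larger weights, so the loss trades
    # off majority against minority predicate performance.
    freqs = class_counts.float() / class_counts.sum()
    weights = (1.0 / freqs.clamp(min=1e-8)) ** gamma
    weights = weights / weights.mean()
    return F.cross_entropy(logits, targets, weight=weights)
```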
In this paper we discuss the theory used in the design of an open source lightmorphic signatures analysis toolkit (LSAT). In addition to providing core functionality, the software package enables specific optimizations through its modular and customizable design. To promote its usage and inspire future contributions, LSAT is publicly available. By using a self-supervised neural network and augmented machine learning algorithms, LSAT provides an easy-to-use interface with ample documentation. The experiments demonstrate that LSAT streamlines the otherwise tedious and error-prone task of translating lightmorphic-associated data into usable spectrograms, enhanced with parameter tuning and performance analysis. With the provided mathematical functions, LSAT validates the nonlinearity encountered in the data conversion process while ensuring the suitability of the forecasting algorithms.
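As a generic illustration of the signal-to-spectrogram step (not LSAT's actual API), a minimal sketch is given below, assuming a one-dimensional signal sampled at a known rate; the window length is an arbitrary choice.

```python
# Generic spectrogram conversion sketch; function and parameter names are
# assumptions, not part of the LSAT package.
import numpy as np
from scipy.signal import spectrogram

def to_spectrogram(signal: np.ndarray, fs: float, nperseg: int = 256):
    """Convert a raw 1-D signal into a time-frequency spectrogram."""
    freqs, times, power = spectrogram(signal, fs=fs, nperseg=nperseg)
    # Log scaling keeps low-power structure visible for later analysis.
    return freqs, times, 10 * np.log10(power + 1e-12)
```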